DeepSeek's Nonsense and "Hallucinations" — Baidu Chairman Vents His Frustration

Photo: Data photo of China's AI company DeepSeek. (Song Biling/Dajiyuan)

[People News] According to Dajiyuan, DeepSeek, a domestic Chinese AI model that was heavily promoted by official media earlier this year, has recently been criticised by Baidu's chairman, Robin Li (Li Yanhong). As DeepSeek-R1 spread rapidly across mainland China after its launch, problems with the model became increasingly apparent. Users quickly found that the information provided by this AI tool was filled with falsehoods and even fabricated content, seriously polluting the Chinese-language online information environment.

Baidu Chairman Vents His Frustration

According to reports from Observer and Sina, Robin Li said at Baidu's AI Developers Conference on April 25 that DeepSeek can only process text input and cannot yet understand multimedia content such as voice, images, or video. He said its bigger problems are that it is slow and expensive, and that it has a high "hallucination rate," making it unreliable in many scenarios.

At the conference, Robin Li stated that the world of large AI models has undergone rapid change over the past year. On one hand, companies producing large models are fiercely competing with each other; on the other, developers are at a loss, hesitant to commit to building applications.

Robin Li emphasised, "Without applications, chips and models have no value."

He pointed out that a major obstacle for developers building AI applications today is that large models are simply too expensive to use.

DeepSeek, a startup based in Hangzhou, China, released its open-source inference model R1 in January this year. At the time, official media declared that DeepSeek had surpassed companies like OpenAI.

However, as users and investigators around the world conducted deeper investigations into DeepSeek, they found it was far from what the early promotions claimed, and even had many security vulnerabilities. Governments and hundreds of companies in Taiwan, Japan, South Korea, the U.S., Canada, Italy, Australia, the Netherlands, and other countries have since banned the use of DeepSeek on government and corporate devices.

On February 8, AI security experts revealed results from deep security tests of DeepSeek to the media. They found that DeepSeek is more susceptible to "jailbreaking" — the bypassing of built-in safety restrictions — than ChatGPT, Gemini, or Claude, making it easier to generate dangerous or illegal content.

In March, AI Infra company Luxun Technology, affiliated with Tsinghua University and the first to offer DeepSeek APIs and cloud services, announced it would suspend these services. Founder You Yang revealed in a post that DeepSeek's real costs were much higher than theoretical estimates. After facing cyberbullying, he publicly stated that it is impossible for DeepSeek in the short term to avoid relying on American technology, asking, "Why can't we just tell the truth?"

According to a March 4 report by Sina Technology, DeepSeek announced on March 1 that its online system's theoretical profit margin was 545%. Shortly afterwards, Luxun Technology announced via official channels that it would stop providing DeepSeek API services within a week, advising users to quickly use up their balances.

The company has not publicly disclosed the reasons for halting DeepSeek services, but You Yang’s extensive posts analysing DeepSeek’s costs make it clear that cost was a key factor.

Nonsense Presented with a Straight Face

According to New Tang Dynasty (NTD) TV, many users found that when using DeepSeek to retrieve information or write articles, the data it provided often had unclear sources, serious logical flaws, and frequent outright errors or fabrications. After simple fact-checking, users were shocked by DeepSeek’s tendency to "speak nonsense with a straight face."

For example, when asked on the Q&A platform Zhihu about the director and team behind the popular Chinese animated movie Nezha: The Devil's Birth (also referred to as Nezha 2), DeepSeek gave an answer falsely claiming that a transformation scene featuring the character Ao Bing had caused a sensation at the Annecy International Animation Film Festival in France. In reality, the film sent to the Annecy festival was New Gods: Nezha Reborn, produced by Light Chaser Animation, not Nezha 2, which was directed by "Jiaozi," and no such transformation scene exists. Other details, such as employees receiving housing and breakthroughs in underwater fluid effects, were also entirely fabricated by DeepSeek.

Another example: A user asked DeepSeek to write an article about how Harry Potter helps English learning. The AI produced a fluent article citing five academic papers. However, upon checking, four of the cited papers were completely fictional, and the only real one did not even mention Harry Potter.

In a third case, someone used DeepSeek to generate fictional historical data as a trap for the Chinese history blogger "Zhibei You." The AI-generated material mixed up the order in which two historical figures died; the inconsistency made the blogger suspicious of the "new historical material," and after days of painstaking verification he debunked it.

In yet another example, users discovered that DeepSeek fabricated Alibaba’s earnings data when asked.

"Hallucination" Problems Cannot Be Ignored

Even more disappointing to users were DeepSeek's frequent errors in legal content. For instance, it cited the Chinese national standard code for Information Technology – Artificial Intelligence Terminology as GB/T 5271.31–2023 (a code that does not even exist) instead of the correct GB/T 41867–2022. When analysing legal issues, DeepSeek often cited laws that were irrelevant, outdated, or even repealed, leading to seemingly plausible but completely incorrect conclusions and revealing serious flaws in its reasoning abilities.

On March 5, the WeChat public account "Shanzi" commented that because DeepSeek lacks genuine legal reasoning capability, the legal information it provides could seriously mislead users who lack professional knowledge but rely on AI for advice, with potentially unexpected consequences.

The article stated, "DeepSeek does show certain abilities in some areas, but its 'hallucination' problem cannot be ignored. It is like a stubborn 'mule,' outputting content at will regardless of facts and logic. We cannot overlook the potential risks it poses just because it offers temporary convenience."

It is worth noting that the "hallucination" problem of AI tools appears across various fields, and it is particularly severe in areas with high public interest, such as politics, history, culture, and entertainment. Chinese netizens are actively discussing these issues.

One user from Jiangsu analysed, "DeepSeek's data sources are a messy integration of the internet and self-media content. Without proper discrimination and with added fabricated reasoning, errors and falsehoods abound." A Shanghai user remarked, "Even if you ask the same question on different phones, the answers are different."